Google unveils 'AI Mode' in the next phase of its journey to change search

The Guardian

Google on Tuesday unleashed another wave of artificial intelligence technology to accelerate a year-long makeover of its search engine that is changing the way people get information and curtailing the flow of internet traffic to other websites. The next phase outlined at Google's annual developers conference includes releasing a new "AI mode" option in the United States. The company says the feature will make interacting with its search engine more like having a conversation with an expert capable of answering a wide array of questions. AI mode is being offered to all users in the US just two-and-a-half months after the company began testing with a limited Labs division audience. Google is also feeding its latest AI model, Gemini 2.5, into its search algorithms and will soon begin testing other AI features, such as the ability to automatically buy concert tickets and conduct searches through live video feeds.


Google unveils 'mindboggling' quantum computing chip

The Guardian

It measures just 4cm squared but possesses almost inconceivable speed: it completed a benchmark calculation in minutes that would take one of today's fastest supercomputers 10 septillion years – a number that far exceeds the age of our known universe and has the scientists behind the latest quantum computing breakthrough reaching for a distinctly non-technical term: "mindboggling". The new chip, called Willow and made in the California beach town of Santa Barbara, is about the dimensions of an After Eight mint, and could supercharge the creation of new drugs by greatly speeding up the experimental phase of development. Reports of its performance follow a flurry of results since 2021 that suggest we are only about five years away from quantum computing becoming powerful enough to start transforming humankind's capabilities to research and develop new materials, from drugs to batteries, one independent UK expert said. Governments around the world are pouring tens of billions of dollars into research.


Google unveils its multilingual, code-generating PaLM 2 language model

Engadget

Google has stood at the forefront of many of the tech industry's AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company's work on foundation models is "the bedrock for the industry and the AI-powered products that billions of people use daily." On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off its latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms. PaLM 2, obviously, is the successor to Google's existing PaLM model that, until recently, powered its experimental Bard AI. "Think of PaLM as a general model that then can be fine-tuned to achieve particular tasks," he explained during a call with reporters earlier in the week. "For example: health research teams have fine-tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts." Ghahramani also notes that PaLM was "the first large language model to perform at an expert level on the US medical licensing exam."



Google unveils new 10-shade skin tone scale to test its AI for bias

#artificialintelligence

Alphabet Inc's Google unveiled a palette of 10 skin tones on Wednesday that it described as a step forward in making gadgets and apps that better serve people of colour. The company said its new Monk Skin Tone Scale replaces a flawed standard of six colours known as the Fitzpatrick Skin Type, which had become popular in the tech industry to assess whether smartwatch heart rate sensors, artificial intelligence systems including facial recognition, and other offerings show colour bias. Tech researchers acknowledged that Fitzpatrick underrepresented people with darker skin. Reuters exclusively reported last year that Google was developing an alternative. The company partnered with Harvard University sociologist Ellis Monk, who studies colourism and had felt dehumanised by cameras that failed to detect his face and reflect his skin tone.
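The article describes using a 10-category scale to check whether a system's performance varies by skin tone. A minimal sketch of that kind of audit, assuming hypothetical evaluation records where each prediction is annotated with a Monk Skin Tone (MST) category from 1 to 10 (the data and function names here are illustrative, not Google's actual tooling):

```python
# Hedged sketch: measuring a model's accuracy per Monk Skin Tone (MST)
# category and the worst-case gap between categories. Records are
# hypothetical (mst_tone, prediction_correct) pairs; in a real audit each
# evaluation image would carry a human-annotated MST value from 1 to 10.
from collections import defaultdict

def accuracy_by_tone(records):
    """records: iterable of (mst_tone, correct) pairs, mst_tone in 1..10."""
    hits, totals = defaultdict(int), defaultdict(int)
    for tone, correct in records:
        totals[tone] += 1
        hits[tone] += int(correct)
    return {tone: hits[tone] / totals[tone] for tone in totals}

def max_disparity(per_tone):
    """Largest accuracy gap between any two tone groups."""
    vals = per_tone.values()
    return max(vals) - min(vals)

# Illustrative data only: two samples each for tones 1, 5 and 10.
records = [(1, True), (1, True), (5, True), (5, False), (10, False), (10, True)]
per_tone = accuracy_by_tone(records)
print(per_tone)                 # {1: 1.0, 5: 0.5, 10: 0.5}
print(max_disparity(per_tone))  # 0.5
```

The point of a finer-grained scale like MST is that a gap invisible under six coarse Fitzpatrick buckets can surface once results are broken out across ten categories.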


Google unveils the world's largest publicly available machine learning hub

#artificialintelligence

Google I/O 2022, Google's largest developer conference, kicked off with a keynote speech from Alphabet CEO Sundar Pichai. The keynote included major announcements such as the launch of the Pixel Watch, updates on PaLM and LaMDA, and advancements in AR and immersive technology. Let us look at the key highlights. "Recently we announced plans to invest USD 9.5 billion in data centers and offices across the US. One of our state-of-the-art data centers is in Mayes County, Oklahoma. I'm excited to announce that, there, we are launching the world's largest, publicly-available machine learning hub for our Google Cloud customers," Sundar Pichai said.


Google unveils 'Pathways', a next-gen AI that can be trained to multitask

#artificialintelligence

Today's AI models, according to Google's AI lead and co-founder of the Google Brain project, Jeff Dean, are at the one-trick pony phase – they are "typically trained to do only one thing". But a new approach called Pathways could provide something akin to a trainable dog that can do multiple tricks. Dean describes Pathways as a "next-generation AI architecture" that "will enable us to train a single model to do thousands or millions of things." Pathways can remove the limits of an AI model's capacity to respond to information from just one sense and allow it to respond to multiple senses, such as text, images and speech. "Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously," Dean explains.


Google Unveils $49 Nest Mini with Better Sound, Dedicated Machine Learning Chip

#artificialintelligence

You'd be hard-pressed to tell the Nest Mini apart from the 2017 Google Home Mini – one of the best Google Home speakers to use with Google …


Google unveils 'mini Disneyland' as it reveals AI 'Interpreter Mode' and brings Assistant to Maps

Daily Mail - Science & tech

Google is going all in on its AI assistant. In an elaborate exhibit at CES, complete with a Disneyland-style ride, the firm showed off a slew of impressive new functions for Google Assistant, including Interpreter Mode to translate dozens of languages in real time, and integration with its Maps app. Google also showed off the new Lenovo Smart Clock, which can set alarms based on your daily habits or calendar appointments and wake you with a gentle light. The Silicon Valley giant took the wraps off the latest updates to Assistant on Tuesday as it officially opened its 18,000-square-foot booth, which relies on an amusement-park-style ride designed in the style of Disney's 'A Small World' to illustrate how Google Assistant can make daily tasks simpler. This is only Google's second year exhibiting at CES.


In a major breakthrough, Google unveils an AI that learns on its own

#artificialintelligence

While AlphaGo Zero's Go capabilities are to be praised, it should be noted that playing a board game is much different from completing other tasks that have more variables. As Eleni Vasilaki, professor of computational neuroscience at Sheffield University, put it while speaking with The Guardian, "AI fails in tasks that are surprisingly easy for humans. Just look at the performance of a humanoid robot in everyday tasks such as walking, running, and kicking a ball." When it comes to matching humans at more complex tasks, AI still has a long way to go – even AI like Siri and the Google Assistant has yet to surpass a fifth grader's level of knowledge. DeepMind CEO Demis Hassabis is also well aware of this gap between humans and AI, explaining that the development and growth of AlphaGo was about more than mastering an ancient game; it was also "a big step for us towards building these general-purpose algorithms."